16 research outputs found

    Following the Robot? Investigating Users’ Utilization of Advice from Robo-Advisors

    Companies are gradually creating new services such as robo-advisors (RA). However, little is known about whether users actually follow RA advice, how strongly the fit of RA to task requirements influences advice utilization, how users perceive RA characteristics, and whether the advisor's perceived expertise is influenced by the user's own expertise. Drawing on judge-advisor systems (JAS) and task-technology fit (TTF), we conducted an experimental study to measure actual advice-taking behavior in the context of RA. While the advisor's perceived expertise is the most influential factor on task-advisor fit for both RA and human advisors, integrity is a significant factor only for human advisors. For RA, in contrast, the user's perception of being able to make decisions efficiently is significant. In our study, users followed RA more than human advisors. Overall, our study connects JAS and TTF to predict advice utilization and supports companies in promoting their services.

    Whose Advice Counts More – Man or Machine? An Experimental Investigation of AI-based Advice Utilization

    Due to advances in artificial intelligence (AI), advisory services can now be provided without human advisors. Building on judge-advisor system literature, we examined how advice utilization differs depending on whether the advice comes from an AI-based or a human advisor and on how similar the advice is to the judge's own estimate. Drawing on task-technology fit, we investigated the relationship between task, advisor, and advice utilization. In study A we measured actual advice utilization within a guessing game, and in study B we measured the perceived task-advisor fit for this game. The findings show that, compared to human advisors, judges utilize the advice of AI-based advisors more when the advice is similar to their own estimate. When the advice differs greatly from their estimate, advice from both sources is used equally. In conclusion, we investigated AI-based advice utilization and present insights for professionals providing AI-based advisory services.
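Actual advice utilization in judge-advisor experiments like study A is commonly quantified as the weight of advice (WOA): how far the judge moves from the initial estimate toward the advice. The abstract does not state the exact measure used, so the sketch below shows the standard JAS definition as an assumption:

```python
def weight_of_advice(initial: float, advice: float, final: float) -> float:
    """Standard JAS measure: 0 = advice ignored, 1 = advice fully adopted.

    WOA = (final - initial) / (advice - initial). The measure is
    undefined when the advice equals the initial estimate, so we
    guard that case explicitly.
    """
    if advice == initial:
        raise ValueError("WOA is undefined when advice equals the initial estimate")
    return (final - initial) / (advice - initial)

# Example: initial guess 100, advisor says 140, judge revises to 120
# -> the judge moved halfway toward the advice, WOA = 0.5
woa = weight_of_advice(initial=100, advice=140, final=120)
```

Averaging WOA across trials per advisor type (AI-based vs. human) then gives the kind of utilization comparison the study reports.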

    Crowdsourcing Data Science: A Qualitative Analysis of Organizations’ Usage of Kaggle Competitions

    In light of ongoing digitization, companies accumulate data that they want to transform into value. However, data scientists are rare, and organizations are struggling to acquire talent. At the same time, individuals interested in machine learning are participating in competitions on data science internet platforms. To investigate whether companies can tackle their data science challenges by hosting data science competitions on internet platforms, we conducted ten interviews with data scientists. While there are various perceived benefits, such as discussing with participants and learning new, state-of-the-art approaches, these competitions can cover only a fraction of the tasks that typically occur during data science projects. We identified 12 factors within three categories that influence an organization's perceived success when hosting a data science competition.

    Adoption of AI-based Information Systems from an Organizational and User Perspective

    Artificial intelligence (AI) is fundamentally changing our society and economy. Companies are investing a great deal of money and time into building corresponding competences and developing prototypes with the aim of integrating AI into their products and services, as well as enriching and improving their internal business processes. This inevitably brings corporate and private users into contact with a new technology that functions fundamentally differently than traditional software. The possibility of using machine learning to generate precise models based on large amounts of data capable of recognizing patterns within that data holds great economic and social potential—for example, in task augmentation and automation, medical diagnostics, and the development of pharmaceutical drugs. At the same time, companies and users are facing new challenges that accompany the introduction of this technology. Businesses are struggling to manage and generate value from big data, and employees fear increasing automation. To better prepare society for the growing market penetration of AI-based information systems into everyday life, a deeper understanding of this technology in terms of organizational and individual use is needed. Motivated by the many new challenges and questions for theory and practice that arise from AI-based information systems, this dissertation addresses various research questions with regard to the use of such information systems from both user and organizational perspectives. A total of five studies were conducted and published: two from the perspective of organizations and three among users. The results of these studies contribute to the current state of research and provide a basis for future studies. In addition, the gained insights enable recommendations to be derived for companies wishing to integrate AI into their products, services, or business processes. 
The first research article (Research Paper A) investigated which factors and prerequisites influence the success of the introduction and adoption of AI. Using the technology–organization–environment framework, various factors in the categories of technology, organization, and environment were identified and validated through the analysis of expert interviews with managers experienced in the field of AI. As a result, factors related to data (especially availability and quality) and to the management of AI projects (especially project management and use cases) were added to the framework; regulatory factors also emerged, such as the uncertainty caused by the General Data Protection Regulation. The focus of Research Paper B is companies' motivation to host data science competitions on online platforms and the factors that influence their success. Extant research has shown that employees with new skills are needed to carry out AI projects and that many companies have problems recruiting such employees. Data science competitions could therefore support the implementation of AI projects via crowdsourcing. The results of the study (expert interviews with data scientists) show that these competitions offer many advantages, such as exchanges and discussions with experienced data scientists and the use of state-of-the-art approaches. However, only a small part of the effort related to AI projects can be represented within the framework of such competitions. The studies in the other three research papers (Research Papers C, D, and E) examine AI-based information systems from a user perspective, with two studies examining user behavior and one focusing on the design of an AI-based IT artifact. Research Paper C analyzes perceptions of AI-based advisory systems in terms of the advantages associated with their use.
The results of the empirical study show that the greatest perceived benefit is the convenience such systems provide, as they are easy to access at any time and can immediately satisfy informational needs. Furthermore, this study examined the effectiveness of 11 different measures to increase trust in AI-based advisory systems. A clear ranking of measures emerged, with effectiveness decreasing from non-binding testing, to providing additional information on how the system works, to adding anthropomorphic features. The goal of Research Paper D was to investigate actual user behavior when interacting with AI-based advisory systems. Based on the theoretical foundations of task–technology fit and judge–advisor systems, an online experiment was conducted. The results show that, above all, perceived expertise and the ability to make efficient decisions through AI-based advisory systems influence whether users assess these systems as suitable for supporting certain tasks. In addition, the study provides initial indications that users might be more willing to follow the advice of AI-based systems than that of human advisors. Finally, Research Paper E designs and implements an IT artifact that uses machine learning techniques to support structured literature reviews. Following the design science research approach, an artifact was iteratively developed that can automatically download research articles from various databases and analyze and group them according to their content using the word2vec algorithm, the latent Dirichlet allocation model, and agglomerative hierarchical clustering. An evaluation of the artifact on a dataset of 308 publications shows that it can be a helpful tool to support literature reviews, but that much manual effort is still required, especially with regard to identifying common concepts in the extant literature.
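The grouping step of Research Paper E's artifact can be illustrated with a minimal sketch: given document vectors (produced by word2vec in the artifact; simple 2-D placeholders here), agglomerative hierarchical clustering starts with one cluster per document and repeatedly merges the two closest clusters. The toy vectors, single-linkage choice, and target cluster count below are illustrative assumptions, not the artifact's actual configuration:

```python
import math

def agglomerative(points, n_clusters):
    """Bottom-up single-linkage clustering: start with one cluster per
    point, repeatedly merge the two closest clusters until n_clusters
    remain. Returns a list of clusters, each a list of point indices."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single linkage: distance between the closest members
                d = min(math.dist(points[a], points[b])
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

# Two tight groups of 2-D "document vectors" (illustrative data)
docs = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
print(sorted(sorted(c) for c in agglomerative(docs, 2)))
# -> [[0, 1, 2], [3, 4]]
```

Recording each merge and its distance, rather than only the final partition, is what yields the dendrograms the artifact uses for graphical presentation.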

    Towards an Integrative Approach for Automated Literature Reviews Using Machine Learning

    Owing to the huge number of scientific publications, most of which are stored as unstructured data, the complexity and workload of the fundamental process of literature reviews are constantly increasing. Based on previous literature, we develop an artifact that partially automates the literature review process, from collecting articles up to their evaluation. This artifact uses a custom crawler, the word2vec algorithm, LDA topic modeling, rapid automatic keyword extraction, and agglomerative hierarchical clustering to enable the automatic acquisition, processing, and clustering of relevant literature and the subsequent graphical presentation of the results using illustrations such as dendrograms. Moreover, the artifact provides information on which topics each cluster addresses and which keywords they contain. We evaluate our artifact on an exemplary set of 308 publications. Our findings indicate that the developed artifact delivers better results than known previous approaches and can be a helpful tool to support researchers in conducting literature reviews.
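Of the components named above, rapid automatic keyword extraction (RAKE) is the simplest to sketch: split the text into candidate phrases at stopwords, then score each phrase by the summed degree-to-frequency ratio of its words. The stopword list and sample sentence below are illustrative assumptions, and the sketch ignores punctuation boundaries that a full RAKE implementation would also use:

```python
import re
from collections import defaultdict

STOPWORDS = {"a", "an", "and", "are", "as", "of", "the", "to", "which", "for"}

def rake_keywords(text):
    """Minimal RAKE: candidate phrases are maximal runs of non-stopwords;
    each word scores degree/frequency, a phrase scores the sum of its
    words. Returns (phrase, score) pairs, highest score first."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(tuple(current))
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(tuple(current))

    freq = defaultdict(int)    # how often each word occurs
    degree = defaultdict(int)  # co-occurrence degree within phrases
    for phrase in phrases:
        for w in phrase:
            freq[w] += 1
            degree[w] += len(phrase)  # includes the word itself

    def score(phrase):
        return sum(degree[w] / freq[w] for w in phrase)

    return sorted({p: score(p) for p in phrases}.items(),
                  key=lambda kv: -kv[1])

text = "agglomerative hierarchical clustering of the relevant literature"
print(rake_keywords(text)[0][0])
# -> ('agglomerative', 'hierarchical', 'clustering')
```

Longer multi-word phrases accumulate higher degree scores, which is why RAKE tends to surface domain terms like the three-word phrase above rather than single frequent words.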

    The German National Pandemic Cohort Network (NAPKON): rationale, study design and baseline characteristics

    Schons M, Pilgram L, Reese J-P, et al. The German National Pandemic Cohort Network (NAPKON): rationale, study design and baseline characteristics. European Journal of Epidemiology. 2022. The German government initiated the Network University Medicine (NUM) in early 2020 to improve national research activities on the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) pandemic. To this end, 36 German academic medical centers started to collaborate on 13 projects, the largest being the National Pandemic Cohort Network (NAPKON). NAPKON's goal is to create the most comprehensive Coronavirus Disease 2019 (COVID-19) cohort in Germany. Within NAPKON, adult and pediatric patients are observed in three complementary cohort platforms (Cross-Sectoral, High-Resolution and Population-Based) from the initial infection until up to three years of follow-up. Study procedures comprise comprehensive clinical and imaging diagnostics, quality-of-life assessment, patient-reported outcomes and biosampling. The three cohort platforms build on four infrastructure core units (Interaction, Biosampling, Epidemiology, and Integration) and on collaborations with other NUM projects. Key components of data capture, regulatory processes, and data privacy are based on those of the German Centre for Cardiovascular Research. By April 01, 2022, 34 university and 40 non-university hospitals had enrolled 5298 patients, with local data quality reviews performed on 4727 (89%). Of these, 47% were female, the median age was 52 (IQR 36-62), and 50 pediatric cases were included. 44% of patients were hospitalized, 15% were admitted to an intensive care unit, and 12% died while enrolled. By April 03, 2022, 8845 visits with biosampling had been conducted in 4349 patients.
In this overview article, we summarize NAPKON's design and relevant milestones, including first study population characteristics, and outline the potential of NAPKON for German and international research activities. Trial registration: https://clinicaltrials.gov/ct2/show/NCT04768998, https://clinicaltrials.gov/ct2/show/NCT04747366, https://clinicaltrials.gov/ct2/show/NCT04679584. © 2022. The Author(s)

    Genomic investigations of unexplained acute hepatitis in children

    Since its first identification in Scotland, over 1,000 cases of unexplained paediatric hepatitis have been reported worldwide, including 278 cases in the UK. Here we report an investigation of 38 cases, 66 age-matched immunocompetent controls and 21 immunocompromised comparator participants, using a combination of genomic, transcriptomic, proteomic and immunohistochemical methods. We detected high levels of adeno-associated virus 2 (AAV2) DNA in the liver, blood, plasma or stool of 27 of 28 cases. We found low levels of human adenovirus (HAdV) and human herpesvirus 6B (HHV-6B) in 23 of 31 and 16 of 23 cases tested, respectively. By contrast, AAV2 was infrequently detected, and at low titre, in the blood or liver of control children with HAdV infection, even when profoundly immunosuppressed. AAV2, HAdV and HHV-6 phylogeny excluded the emergence of novel strains in cases. Histological analyses of explanted livers showed enrichment for T cells and B-lineage cells. Proteomic comparison of liver tissue from cases and healthy controls identified increased expression of HLA class II, immunoglobulin variable regions and complement proteins. HAdV and AAV2 proteins were not detected in the livers. Instead, we identified AAV2 DNA complexes reflecting both HAdV-mediated and HHV-6B-mediated replication. We hypothesize that high levels of abnormal AAV2 replication products, aided by HAdV and, in severe cases, HHV-6B, may have triggered immune-mediated hepatic disease in genetically and immunologically predisposed children.

    A NEW ORGANIZATIONAL CHASSIS FOR ARTIFICIAL INTELLIGENCE - EXPLORING ORGANIZATIONAL READINESS FACTORS

    In 2018, investments in AI increased rapidly by over 50 percent compared to the previous year, reaching 19.1 billion USD. However, little is known about the AI-specific requirements or readiness factors necessary to ensure a successful organizational implementation of this technological innovation. Additionally, extant IS research has largely overlooked the possible strategic impact of AI investments on processes, structures, and management. Drawing on the TOE framework, different factors are identified and then validated through 12 expert interviews with 14 interviewees regarding their applicability to the adoption process of artificial intelligence. The results strongly suggest that the general TOE framework, which has been applied to other technologies such as cloud computing, needs to be revisited and extended for use in this specific context. For example, new factors emerged that include data (in particular, the availability, quality, and protection of data) as well as regulatory issues arising from the newly introduced GDPR. Our study thus provides an expanded TOE framework adapted to the specific requirements of artificial intelligence adoption, as well as 12 propositions regarding the particular effects of the suggested factors, which can serve as a basis for future AI adoption research and guide managerial decision-making.

    Data-Assisted Value Stream Method


    Promoting Trust in AI-based Expert Systems

    Recent advances in artificial intelligence (AI) research allow building sophisticated models that advise users in various scenarios (e.g., financial planning, medical diagnosis). For companies, this development is relevant because it allows the scaling of services that were not scalable before. Nonetheless, in the end, users decide whether or not to use a service. Therefore, we conducted a survey with 226 participants to measure the relative advantage of AI-based advisory over human experts in the context of financial planning. The results show that the most important advantage users perceive is convenience, since they get easy and instant satisfaction of their informational needs. Furthermore, the effectiveness of eleven measures to increase trust in AI-based advisory systems was evaluated. The findings show that the ability to test the service without obligation is superior, while the implementation of human traits is negligible.